Microsoft Tay was a short-lived Twitter bot released in 2016 that incorporated machine learning. It was shut down within days as it rapidly began using offensive language that it was learning from human use of the platform. It was an early warning of the potential dangers of unrestricted learning from human behaviour. Current chatbots based on LLMs have guards in place to prevent certain kinds of response. One of the reasons that Tay so rapidly learnt bad behaviour is that on social media it is often the most inflammatory posts that get the most traction and re-posting. This is deliberately exploited by bots disseminating misinformation.
Used in Chap. 20: page 337